57 research outputs found

    Reduced order modeling of fluid flows: Machine learning, Kolmogorov barrier, closure modeling, and partitioning

    In this paper, we put forth a long short-term memory (LSTM) nudging framework for the enhancement of reduced order models (ROMs) of fluid flows utilizing noisy measurements. We build on the fact that in a realistic application, there are uncertainties in initial conditions, boundary conditions, model parameters, and/or field measurements. Moreover, conventional nonlinear ROMs based on Galerkin projection (GROMs) suffer from imperfection and solution instabilities due to the modal truncation, especially for advection-dominated flows with slow decay in the Kolmogorov width. In the presented LSTM-Nudge approach, we fuse forecasts from a combination of an imperfect GROM and uncertain state estimates with sparse Eulerian sensor measurements to provide more reliable predictions in a dynamical data assimilation framework. We illustrate the idea with the viscous Burgers problem, a benchmark test bed with quadratic nonlinearity and Laplacian dissipation. We investigate the effects of measurement noise and state estimate uncertainty on the performance of the LSTM-Nudge framework. We also demonstrate that it can handle different levels of temporal and spatial measurement sparsity. This first step in our assessment of the proposed model shows that LSTM nudging could represent a viable real-time predictive tool in emerging digital twin systems.
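The mechanism behind nudging-based data assimilation can be sketched in a few lines. The toy below relaxes an imperfect forecast model toward sparse, noisy sensor data through a constant gain acting on the innovation; in the paper's framework an LSTM plays the role this fixed gain plays here. All dynamics, gains, and dimensions are illustrative, not the paper's setup.

```python
import numpy as np

def nudge_step(x, rhs, y_obs, H, K, dt):
    """One forward-Euler step of the nudged model dx/dt = rhs(x) + K (y_obs - H x)."""
    return x + dt * (rhs(x) + K @ (y_obs - H @ x))

rng = np.random.default_rng(0)
n, dt, steps = 4, 0.01, 500
A_true = -0.5 * np.eye(n)     # "truth" dynamics (toy stand-in for the full system)
A_model = -0.3 * np.eye(n)    # imperfect forecast model (e.g. truncation error)
H = np.eye(n)[:2]             # sparse Eulerian sensors: observe 2 of 4 components
K = 5.0 * H.T                 # constant nudging gain (an LSTM would replace this)

x_true, x_free, x_nudged = np.ones(n), np.ones(n), np.ones(n)
for _ in range(steps):
    y = H @ x_true + 0.01 * rng.standard_normal(2)   # noisy measurements
    x_true = x_true + dt * (A_true @ x_true)
    x_free = x_free + dt * (A_model @ x_free)        # free-running imperfect model
    x_nudged = nudge_step(x_nudged, lambda v: A_model @ v, y, H, K, dt)

err_free = np.linalg.norm(x_free - x_true)
err_nudged = np.linalg.norm(x_nudged - x_true)
```

The nudged trajectory tracks the truth on the observed components despite the model error, so its final error is smaller than that of the free-running model.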

    Physics-guided machine learning for turbulence closure and reduced-order modeling

    Recent advances in scientific machine learning have started to show promising results in fluid mechanics. Despite this early success, the application of data-driven methods to turbulent flow simulation is non-trivial due to the underlying highly nonlinear multiscale interactions. Here we present novel physics-guided machine learning (PGML) approaches for turbulence closure model discovery and model order reduction of complex multiscale systems. Our turbulence closure model discovery approach is based on exploiting big data while learning from physical constraints, without relying on a prescribed turbulence theory. Specifically, we propose a frame-invariant neural network model that can incorporate physical symmetries as inductive biases, and we demonstrate its stable performance in coarse-grid simulations without any post-processing of the predicted subgrid-scale (SGS) closure model. The frame-invariant SGS model guarantees the desired physical constraints without the need for any regularization terms and generalizes to different initial conditions and Reynolds numbers. To achieve data-efficient training and improved generalization, we propose a concatenated neural network with an uncertainty quantification mechanism that leverages information from hierarchies of models. The concatenated neural network is based on embedding information from cheap-to-evaluate low-fidelity approximations into certain hidden layers of the neural network, both during training and deployment. This framework is demonstrated for a range of problems, including turbulent boundary layer reconstruction and reduced-order modeling of the vortex merging process. Furthermore, we investigate the seamless integration of sparse and noisy observations into non-intrusive reduced-order models, as well as hybrid models in which the dynamical core of the system is modeled using the known governing equations and the subgrid-scale processes are modeled using a deep learning model.
To summarize, this work builds a bridge between extensive physics-based theories and data-driven modeling paradigms and paves the way for hybrid physics-informed learning algorithms that generate predictive technologies for turbulent fluid flows.
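The "concatenated" architecture can be sketched as a forward pass in which features from a cheap low-fidelity model are appended to a hidden layer, the same injection being applied at training and deployment time. Weights are random here purely to show the data flow; all shapes and layer names are illustrative, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(1)
n_in, n_h1, n_lf, n_h2, n_out = 8, 16, 4, 12, 2   # illustrative layer widths

W1, b1 = rng.standard_normal((n_in, n_h1)), np.zeros(n_h1)
# second layer is widened to accept the injected low-fidelity features
W2, b2 = rng.standard_normal((n_h1 + n_lf, n_h2)), np.zeros(n_h2)
W3, b3 = rng.standard_normal((n_h2, n_out)), np.zeros(n_out)

def pgml_forward(x, x_lowfid):
    """Forward pass with low-fidelity features concatenated at a hidden layer."""
    h1 = np.tanh(x @ W1 + b1)
    h1 = np.concatenate([h1, x_lowfid], axis=-1)   # physics-guided injection point
    h2 = np.tanh(h1 @ W2 + b2)
    return h2 @ W3 + b3

# batch of 5 inputs, each paired with its low-fidelity model evaluation
y = pgml_forward(rng.standard_normal((5, n_in)), rng.standard_normal((5, n_lf)))
```

Because the injection happens inside the network rather than at the input, later layers can learn how much to trust the low-fidelity signal for each prediction.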

    Numerical assessments of a nonintrusive surrogate model based on recurrent neural networks and proper orthogonal decomposition: Rayleigh Benard convection

    Recent developments in diagnostic and computing technologies make it possible to leverage numerous forms of nonintrusive modeling approaches from data, where machine learning can be used to build computationally cheap and accurate surrogate models. To this end, we present a nonlinear proper orthogonal decomposition (POD) framework, denoted NLPOD, to forge a nonintrusive reduced-order model for the Boussinesq equations. In our NLPOD approach, we first employ the POD procedure to obtain a set of global modes that build a linear-fit latent space, and we utilize an autoencoder network to compress the projection of this latent space through a nonlinear unsupervised mapping of the POD coefficients. Then, a long short-term memory (LSTM) neural network architecture is utilized to discover temporal patterns in this low-rank manifold. While performing a detailed sensitivity analysis of the hyperparameters of the LSTM model, we systematically analyze the trade-off between accuracy and efficiency for solving a canonical Rayleigh–Bénard convection system.
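The linear first stage of such a pipeline, snapshot POD, reduces to an SVD of the snapshot matrix. The sketch below shows only that stage on synthetic 1-D data; the autoencoder compression and LSTM time-stepper of the paper are not reproduced, and all names are illustrative.

```python
import numpy as np

def pod(snapshots, r):
    """Return r POD modes and their time coefficients for a snapshot matrix
    whose columns are flattened flow-field states."""
    U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)
    modes = U[:, :r]
    coeffs = modes.T @ snapshots          # a(t) = Phi^T u(t)
    return modes, coeffs

# toy snapshot data: two travelling sine waves on a 1-D grid (exact rank 4)
x = np.linspace(0, 2 * np.pi, 128)
t = np.linspace(0, 1, 40)
X = np.array([np.sin(x - 2 * ti) + 0.3 * np.sin(3 * (x - ti)) for ti in t]).T

modes, a = pod(X, r=4)
X_rec = modes @ a                          # rank-4 linear reconstruction
rel_err = np.linalg.norm(X - X_rec) / np.linalg.norm(X)
```

For this low-rank toy field the linear reconstruction is essentially exact; for advection-dominated flows it degrades, which is the gap the nonlinear autoencoder stage is meant to close.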

    Variational multiscale reinforcement learning for discovering reduced order closure models of nonlinear spatiotemporal transport systems

    A central challenge in the computational modeling and simulation of a multitude of science applications is to achieve robust and accurate closures for their coarse-grained representations due to underlying highly nonlinear multiscale interactions. These closure models are common in many nonlinear spatiotemporal systems to account for losses due to reduced order representations, including many transport phenomena in fluids. Previous data-driven closure modeling efforts have mostly focused on supervised learning approaches using high fidelity simulation data. On the other hand, reinforcement learning (RL) is a powerful yet relatively uncharted method for spatiotemporally extended systems. In this study, we put forth a modular dynamic closure modeling and discovery framework to stabilize the Galerkin projection based reduced order models that may arise in many nonlinear spatiotemporal dynamical systems with quadratic nonlinearity. A key element in creating a robust RL agent is to introduce a feasible reward function, which can be constructed from any difference metric between the RL model and high fidelity simulation data. First, we introduce a multi-modal RL approach to discover mode-dependent closure policies that utilize the high fidelity data in rewarding our RL agent. We then formulate a variational multiscale RL (VMRL) approach to discover closure models without requiring access to the high fidelity data in designing the reward function. Specifically, our chief innovation is to leverage the variational multiscale formalism to quantify the difference between modal interactions in Galerkin systems. Our results in simulating the viscous Burgers equation indicate that the proposed VMRL method leads to robust and accurate closure parameterizations, and it may potentially be used to discover scale-aware closure models for complex dynamical systems.
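The object being stabilized, a Galerkin ROM with quadratic nonlinearity plus a pluggable closure term, can be written compactly. The linear mode-dependent eddy-viscosity closure below is a common hand-crafted baseline standing in for the learned policy; the operators, coefficients, and dimensions are all illustrative, not the paper's Burgers system.

```python
import numpy as np

rng = np.random.default_rng(2)
r = 6                                              # number of retained modes
L = -0.1 * np.diag(np.arange(1, r + 1.0))          # linear (viscous) operator
N = 0.05 * rng.standard_normal((r, r, r))          # quadratic interaction tensor

def grom_rhs(a, closure):
    """da_k/dt = L_{kj} a_j + N_{kij} a_i a_j + closure_k(a)."""
    return L @ a + np.einsum('kij,i,j->k', N, a, a) + closure(a)

def eddy_viscosity_closure(a, nu_e=0.2):
    # mode-dependent damping: stronger dissipation on higher (truncated) modes,
    # the kind of coefficient an RL policy could instead select per mode
    return -nu_e * np.arange(1, r + 1) * a

a = 0.5 * np.ones(r)
for _ in range(2000):                              # forward Euler in time
    a = a + 1e-3 * grom_rhs(a, eddy_viscosity_closure)
```

Without the closure term, energy piling up in the truncated quadratic interactions can destabilize the trajectory; with it, the modal amplitudes decay, which is the behavior the reward function is built to encourage.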

    Physics guided neural networks for modelling of non-linear dynamics

    The success of the current wave of artificial intelligence can be partly attributed to deep neural networks, which have proven to be very effective in learning complex patterns from large datasets with minimal human intervention. However, it is difficult to train these models on complex dynamical systems from data alone due to their low data efficiency and sensitivity to hyperparameters and initialisation. This work demonstrates that injection of partially known information at an intermediate layer in a DNN can improve model accuracy, reduce model uncertainty, and yield improved convergence during training. The value of these physics-guided neural networks has been demonstrated by learning the dynamics of a wide variety of nonlinear dynamical systems represented by five well-known equations in nonlinear systems theory: the Lotka–Volterra, Duffing, Van der Pol, Lorenz, and Hénon–Heiles systems.
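The injection idea can be made concrete on the Lotka–Volterra system dx/dt = a x - b x y, dy/dt = -c y + d x y: the partially known terms (here the linear growth/decay terms a x and -c y) are computed analytically and concatenated into an intermediate layer, so the network only has to learn the remaining interaction terms. Weights are untrained and all names and coefficients are illustrative, not the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(3)
a_, c_ = 1.1, 0.4                                  # assumed-known coefficients

def known_physics(z):
    """Partially known piece of the Lotka-Volterra right-hand side."""
    return np.stack([a_ * z[..., 0], -c_ * z[..., 1]], axis=-1)

W1, b1 = rng.standard_normal((2, 8)), np.zeros(8)
W2, b2 = rng.standard_normal((8 + 2, 8)), np.zeros(8)  # widened by 2 physics features
W3, b3 = rng.standard_normal((8, 2)), np.zeros(2)

def pgnn(z):
    """Predict the full right-hand side with known terms injected mid-network."""
    h = np.tanh(z @ W1 + b1)
    h = np.concatenate([h, known_physics(z)], axis=-1)  # inject known terms
    return np.tanh(h @ W2 + b2) @ W3 + b3

out = pgnn(rng.standard_normal((10, 2)))           # batch of 10 state vectors
```

The network still sees the raw state at the input, so the injected features act as a prior rather than a hard constraint.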

    A priori analysis on deep learning of subgrid-scale parameterizations for Kraichnan turbulence

    In the present study, we investigate different data-driven parameterizations for large eddy simulation of two-dimensional turbulence in an a priori setting. These models utilize resolved flow field variables on the coarser grid to estimate the subgrid-scale stresses. We use data-driven closure models based on localized learning that employs a multilayer feedforward artificial neural network (ANN) with point-to-point mapping and neighboring stencil data mapping, and a convolutional neural network (CNN) fed by data snapshots of the whole domain. The performance of these data-driven closure models is measured through probability density functions and is compared with the dynamic Smagorinsky model (DSM). The quantitative performance is evaluated using the cross-correlation coefficient between the true and predicted stresses. We analyze the different frameworks in terms of the amount of training data, the selection of input and output features, their modeling accuracy, and their training and deployment computational time. We also demonstrate the computational gain that can be achieved using an intelligent eddy viscosity model that learns the eddy viscosity computed by the DSM instead of the subgrid-scale stresses. We detail the hyperparameter optimization of these models using a grid search algorithm.
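The a priori evaluation metric named in the abstract, the cross-correlation coefficient between true (filtered high-resolution) and predicted subgrid-scale stress fields, is simply a Pearson correlation over the field. The arrays below are synthetic placeholders, not actual stress data.

```python
import numpy as np

def cross_correlation(tau_true, tau_pred):
    """Cross-correlation coefficient between two stress fields (1 = perfect)."""
    t = tau_true - tau_true.mean()
    p = tau_pred - tau_pred.mean()
    return float((t * p).sum() / np.sqrt((t * t).sum() * (p * p).sum()))

rng = np.random.default_rng(4)
tau = rng.standard_normal((64, 64))               # stand-in for a true SGS stress field
cc_perfect = cross_correlation(tau, tau)          # a perfect model scores 1
cc_noisy = cross_correlation(tau, tau + 0.5 * rng.standard_normal((64, 64)))
```

Because the metric is normalized and mean-subtracted, it rewards capturing the spatial structure of the stresses rather than their absolute magnitude, which is why it is typically reported alongside probability density functions.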